
    General spherically symmetric elastic stars in Relativity

    The relativistic theory of elasticity is reviewed within the spherically symmetric context with a view towards the modeling of star interiors possessing elastic properties such as the ones expected in neutron stars. Emphasis is placed on generality in the main sections of the paper, and the results are then applied to specific examples. Along the way, a few general results for spacetimes admitting isometries are deduced, and their consequences are fully exploited in the case of spherical symmetry, relating them next to the case in which the material content of the spacetime is some elastic material. This paper extends and generalizes the pioneering work by Magli and Kijowski [1], Magli [2] and [3], and complements, in a sense, that by Karlovini and Samuelsson in their interesting series of papers [4], [5] and [6]. Comment: 23 pages
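    For reference, a commonly used starting point in such analyses is the general spherically symmetric line element in Schwarzschild-like coordinates (a standard form; the paper's own conventions and coordinate choices may differ):

        ds^2 = -e^{2\Phi(t,r)} \, dt^2 + e^{2\Lambda(t,r)} \, dr^2 + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)

    where \Phi and \Lambda reduce to functions of r alone in the static case.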

    Permutation invariance and uncertainty in multitemporal image super-resolution

    Recent advances have shown how deep neural networks can be extremely effective at super-resolving remote sensing imagery, starting from a multitemporal collection of low-resolution images. However, existing models have neglected the issue of temporal permutation: the temporal ordering of the input images does not carry any relevant information for the super-resolution task, and relying on it makes such models inefficient with the often scarce ground truth data available for training. Thus, models ought not to learn feature extractors that rely on temporal ordering. In this paper, we show how building a model that is fully invariant to temporal permutation significantly improves performance and data efficiency. Moreover, we study how to quantify the uncertainty of the super-resolved image so that the final user is informed of the local quality of the product. We show how uncertainty correlates with temporal variation in the series, and how quantifying it further improves model performance. Experiments on the Proba-V challenge dataset show significant improvements over the state of the art without the need for self-ensembling, as well as improved data efficiency, reaching the performance of the challenge winner with just 25% of the training data.
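    As an illustration of the key idea, the sketch below shows a permutation-invariant temporal fusion in PyTorch: a shared encoder is applied to every low-resolution frame and the per-frame features are averaged over time, so any reordering of the inputs yields the same output. Layer sizes, the scale factor, and the class name are illustrative assumptions, not the architecture proposed in the paper.

        import torch
        import torch.nn as nn

        class PermutationInvariantSR(nn.Module):
            """Toy multitemporal super-resolution: shared per-frame encoder,
            symmetric (mean) pooling over time, then upsampling and decoding."""
            def __init__(self, in_channels=1, features=64, scale=3):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
                    nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(features, in_channels, 3, padding=1),
                )

            def forward(self, x):
                # x: (batch, time, channels, height, width) low-resolution frames
                b, t, c, h, w = x.shape
                feats = self.encoder(x.reshape(b * t, c, h, w)).reshape(b, t, -1, h, w)
                fused = feats.mean(dim=1)  # symmetric pooling -> permutation invariance
                return self.decoder(fused)

        # Shuffling the temporal axis leaves the output unchanged:
        # torch.allclose(model(frames), model(frames[:, torch.randperm(frames.shape[1])]))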

    Convolutional neural networks for on-board cloud screening

    A cloud screening unit on a satellite platform for Earth observation can play an important role in optimizing communication resources by selecting images with interesting content while skipping those that are highly contaminated by clouds. In this study, we address the cloud screening problem by investigating an encoder–decoder convolutional neural network (CNN). CNNs usually employ millions of parameters to provide high accuracy; on the other hand, the satellite platform imposes hardware constraints on the processing unit. Hence, to allow an on-board implementation, we experimentally investigate several solutions to reduce the resource consumption of the CNN while preserving its classification accuracy. We explore approaches such as halving the computation precision, using fewer spectral bands, reducing the input size, decreasing the number of network filters, and using shallower networks, under the constraint that the resulting CNN must have a sufficiently small memory footprint to fit the memory of a low-power accelerator for embedded systems. The trade-off between network performance and resource consumption has been studied on the publicly available SPARCS dataset. Finally, we show that the proposed network can be implemented on the satellite board while performing with reasonably high accuracy compared with the state of the art.
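    To make the resource trade-off concrete, the following sketch builds a deliberately small encoder–decoder whose footprint shrinks as the number of spectral bands, filters, or layers is reduced, and converts it to 16-bit precision. The layout and numbers are hypothetical, intended only to show how the reduction knobs mentioned in the abstract translate into memory footprint; this is not the network studied in the paper.

        import torch.nn as nn

        def tiny_encoder_decoder(in_bands=10, base_filters=16, depth=2):
            """Configurable encoder-decoder producing a per-pixel cloud-mask logit."""
            layers, ch = [], in_bands
            for _ in range(depth):  # encoder: conv + downsampling
                layers += [nn.Conv2d(ch, base_filters, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
                ch = base_filters
            for _ in range(depth):  # decoder: upsampling + conv
                layers += [nn.Upsample(scale_factor=2), nn.Conv2d(ch, base_filters, 3, padding=1), nn.ReLU()]
            layers += [nn.Conv2d(ch, 1, 1)]  # cloud-probability logit per pixel
            return nn.Sequential(*layers)

        # Fewer bands, fewer filters, and half precision all shrink the footprint:
        model = tiny_encoder_decoder(in_bands=4, base_filters=8, depth=2).half()
        n_params = sum(p.numel() for p in model.parameters())
        print(f"{n_params} parameters, ~{n_params * 2 / 1024:.1f} KiB at 16-bit precision")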

    Learning Localized Representations of Point Clouds with Graph-Convolutional Generative Adversarial Networks

    Point clouds are an important type of geometric data generated by 3D acquisition devices and have widespread use in computer graphics and vision. However, learning representations for point clouds is particularly challenging because they are unordered collections of points irregularly distributed in 3D space. Recently, supervised and semi-supervised problems for point clouds have leveraged graph convolution, a generalization of the convolution operation to data defined over graphs, which has been shown to be very successful at extracting localized features from point clouds. In this paper, we study the unsupervised problem of a generative model exploiting graph convolution. Employing graph convolution operations in generative models is not straightforward and poses some unique challenges. In particular, we focus on the generator of a GAN, where the graph is not known in advance, as it is the very output of the generator. We show that the proposed architecture can learn to generate the graph and the features simultaneously. We also study the problem of defining an upsampling layer in the graph-convolutional generator, proposing two methods that learn to exploit a multi-resolution or a self-similarity prior, respectively, to sample the data distribution.
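    A minimal sketch of the core operation, assuming PyTorch: because the generator has no input graph, a k-nearest-neighbour graph is built dynamically from the current point features and a shared linear map aggregates each point's neighbourhood. Function and variable names are illustrative; the paper's actual layers and its two upsampling schemes are not reproduced here.

        import torch
        import torch.nn as nn

        def knn_graph_conv(x, lin, k=8):
            """Dynamic graph convolution: build a k-NN graph from the features
            themselves, then mix each point with the mean of its neighbours.
            x: (num_points, in_dim), lin: nn.Linear(in_dim, out_dim)."""
            dists = torch.cdist(x, x)                              # pairwise distances
            idx = dists.topk(k + 1, largest=False).indices[:, 1:]  # k nearest neighbours (self excluded)
            neighbours = x[idx]                                    # (num_points, k, in_dim)
            aggregated = neighbours.mean(dim=1)                    # local neighbourhood average
            return torch.relu(lin(x) + lin(aggregated))

        points = torch.randn(128, 32)                    # e.g. intermediate generator features
        out = knn_graph_conv(points, nn.Linear(32, 64))  # (128, 64) localized features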

    NIR image colorization with graph-convolutional neural networks

    Colorization of near-infrared (NIR) images is a challenging problem due to the different material properties at infrared wavelengths, which reduce the correlation with visible images. In this paper, we study how graph-convolutional neural networks allow exploiting a more powerful inductive bias than standard CNNs, in the form of non-local self-similarity. Its impact is evaluated by showing how training with only a mean squared error loss leads to poor results with a standard CNN, while the graph-convolutional network produces significantly sharper and more realistic colorizations.
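    The non-local self-similarity prior can be illustrated with a small sketch, assuming PyTorch: each pixel is aggregated with its most similar pixels anywhere in the feature map, rather than only with a local window as in a standard CNN. This is an illustrative operation, not the colorization network described in the paper.

        import torch
        import torch.nn.functional as F

        def nonlocal_self_similarity(feat, k=8):
            """Average each pixel with its k most similar pixels (feature-space
            nearest neighbours across the whole image). feat: (channels, h, w)."""
            c, h, w = feat.shape
            flat = feat.reshape(c, h * w).t()                                # (pixels, channels)
            sim = F.normalize(flat, dim=1) @ F.normalize(flat, dim=1).t()    # cosine similarity
            sim.fill_diagonal_(-1.0)                                         # exclude the pixel itself
            idx = sim.topk(k, dim=1).indices                                 # k most similar other pixels
            agg = flat[idx].mean(dim=1)                                      # non-local average
            return agg.t().reshape(c, h, w)

        out = nonlocal_self_similarity(torch.randn(16, 32, 32))  # same shape, non-locally pooled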

    Shot-based object retrieval from video with compressed Fisher vectors

    This paper addresses the problem of retrieving, from a database of video sequences, those shots that match a query image. Existing architectures are mainly based on the Bag of Words model, which consists in matching the query image with a high-level representation of local features extracted from the video database. However, such architectures lack the capability to scale up to very large databases. Recently, Fisher Vectors showed promising results in large-scale image retrieval problems, but it is still not clear how they can best be exploited in video-related applications. In our work, we use compressed Fisher Vectors to represent the video shots and we show that the inherent correlation between video frames can be profitably exploited. Experiments show that our proposal achieves better performance with lower computational requirements than similar architectures.
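    To give the flavour of the descriptor, here is a minimal sketch of a simplified Fisher Vector built from a diagonal-covariance GMM, using only the gradients with respect to the means, with power and L2 normalization. The compression step (e.g. dimensionality reduction or quantization) and the actual frame-level features used in the paper are omitted; all names and sizes are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fisher_vector(descriptors, gmm):
            """Simplified Fisher Vector: gradients w.r.t. the GMM means only."""
            gamma = gmm.predict_proba(descriptors)                  # (N, K) responsibilities
            sigma = np.sqrt(gmm.covariances_)                       # (K, D) diagonal std devs
            diff = (descriptors[:, None, :] - gmm.means_) / sigma   # (N, K, D)
            fv = (gamma[:, :, None] * diff).sum(axis=0)             # (K, D)
            fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
            fv = fv.ravel()
            fv = np.sign(fv) * np.sqrt(np.abs(fv))                  # power normalization
            return fv / (np.linalg.norm(fv) + 1e-12)                # L2 normalization

        # Toy retrieval: describe each shot by a Fisher Vector of its local
        # features and rank shots by similarity to the query descriptor.
        rng = np.random.default_rng(0)
        gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
        gmm.fit(rng.normal(size=(2000, 64)))
        shot_fvs = np.stack([fisher_vector(rng.normal(size=(300, 64)), gmm) for _ in range(5)])
        query_fv = fisher_vector(rng.normal(size=(100, 64)), gmm)
        ranking = np.argsort(-(shot_fvs @ query_fv))                # most similar shots first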